GMSA: Gathering Multiple Signatures Approach to Defend Against Code Injection Attacks

Authors
Abstract


Similar Articles

Code Pointer Masking: Hardening Applications against Code Injection Attacks

In this paper we present an efficient countermeasure against code injection attacks. Our countermeasure does not rely on secret values such as stack canaries and protects against attacks that are not addressed by state-of-the-art countermeasures of similar performance. By enforcing the correct semantics of code pointers, we thwart attacks that modify code pointers to divert the application’s co...
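To make the code-pointer idea above concrete, here is a minimal, self-contained C sketch of masking an attacker-influenced value so that an indirect call can only reach a fixed set of legitimate targets. The dispatch table, the index-masking variant, and all names are illustrative assumptions made for this sketch, not the paper's actual compiler instrumentation.

```c
#include <stdio.h>

typedef void (*handler_t)(void);

static void open_file(void)  { puts("open_file");  }
static void close_file(void) { puts("close_file"); }

/* The only code addresses this indirect call site is allowed to reach. */
static handler_t const call_targets[] = { open_file, close_file };

/* With a power-of-two table, idx & TARGET_MASK can never index outside
 * call_targets, so even a corrupted selector cannot divert control flow
 * to injected code. */
#define TARGET_MASK 0x1u

static void dispatch(unsigned idx)
{
    call_targets[idx & TARGET_MASK]();
}

int main(void)
{
    dispatch(0);            /* open_file */
    dispatch(1);            /* close_file */
    dispatch(0xdeadbeefu);  /* corrupted value is masked back into the table */
    return 0;
}
```

Because the mask is applied on every dispatch, a value corrupted by an injected payload can still only select an entry of call_targets, which captures the "no secret values, enforce pointer semantics" flavour of the countermeasure.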


Multilayer Approach to Defend Phishing Attacks

Spam messes up users' inboxes, consumes resources, and spreads attacks such as DDoS, MiM, and phishing. Phishing is a byproduct of email and causes financial loss to users and loss of reputation to financial institutions. In this paper we examine the characteristics of phishing and the technology used by phishers. In order to counter anti-phishing technology, phishers change their mode of operation; theref...


Countering Code Injection Attacks: A Unified Approach

Code injection exploits a software vulnerability through which a malicious user can make an application run unauthorized code. Server applications frequently employ dynamic and domain-specific languages, which are used as vectors for the attack. We propose a generic approach that prevents the class of injection attacks involving these vectors: our scheme detects attacks by using location-specif...
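As a rough illustration of location-specific detection, the C sketch below learns one structural signature per query call site during a benign training run and flags any runtime statement whose structure differs. The skeletonization rules, the FNV-1a hash, and the fixed-size site table are assumptions made for this example, not the scheme's actual design.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_SITES 64

/* Expected structural hash for each statement-issuing call site,
 * filled in during a benign training run. */
static uint64_t learned[MAX_SITES];

/* FNV-1a hash of the query "skeleton": string and numeric literals are
 * collapsed to a single '?', so benign parameter changes keep the hash
 * stable while injected clauses change it. */
static uint64_t skeleton_hash(const char *q)
{
    uint64_t h = 1469598103934665603ULL;
    int i = 0;
    while (q[i]) {
        unsigned char c = (unsigned char)q[i];
        if (c == '\'') {                       /* collapse a 'string' literal */
            i++;
            while (q[i] && q[i] != '\'') i++;
            if (q[i]) i++;
            c = '?';
        } else if (isdigit(c)) {               /* collapse a run of digits */
            while (isdigit((unsigned char)q[i])) i++;
            c = '?';
        } else {
            i++;
        }
        h ^= c;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Training: remember the benign structure seen at this call site. */
static void learn(unsigned site, const char *q)
{
    learned[site] = skeleton_hash(q);
}

/* Runtime: a statement whose structure differs from the learned one is flagged. */
static bool is_injection(unsigned site, const char *q)
{
    return skeleton_hash(q) != learned[site];
}

int main(void)
{
    learn(0, "SELECT * FROM users WHERE id = 42");

    const char *ok  = "SELECT * FROM users WHERE id = 7";
    const char *bad = "SELECT * FROM users WHERE id = 7 OR 1=1";

    printf("%-45s %s\n", ok,  is_injection(0, ok)  ? "blocked" : "allowed");
    printf("%-45s %s\n", bad, is_injection(0, bad) ? "blocked" : "allowed");
    return 0;
}
```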


Defend encryption systems against side-channel attacks

From its ancient origin as a tool for protecting sensitive wartime or espionage-related messages, cryptography has become a foundational building-block for securing the systems, protocols, and infrastructure that underpin our modern interconnected world. But the physical mechanisms used in performing encryption and decryption can leak information, making it possible to bypass this security. Pro...


Divide, Denoise, and Defend against Adversarial Attacks

Deep neural networks, although shown to be a successful class of machine learning algorithms, are known to be extremely unstable to adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing the input image into multiple patches, denoising each patch...
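The divide-and-denoise structure can be sketched without any learning machinery: the C example below tiles an image into fixed-size patches and filters each patch independently, with a plain 3x3 mean filter standing in for whatever per-patch denoiser the paper trains. Image size, patch size, and the filter choice are all assumptions made for illustration.

```c
#include <stdio.h>

#define W 8
#define H 8
#define PATCH 4   /* the image is split into PATCH x PATCH tiles */

/* Denoise one patch in place with a 3x3 mean filter clamped to the patch,
 * so information never flows across patch boundaries. */
static void denoise_patch(float img[H][W], int py, int px)
{
    float out[PATCH][PATCH];
    for (int y = 0; y < PATCH; ++y) {
        for (int x = 0; x < PATCH; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= PATCH || xx < 0 || xx >= PATCH)
                        continue;          /* stay inside this patch */
                    sum += img[py + yy][px + xx];
                    ++n;
                }
            }
            out[y][x] = sum / (float)n;
        }
    }
    for (int y = 0; y < PATCH; ++y)
        for (int x = 0; x < PATCH; ++x)
            img[py + y][px + x] = out[y][x];
}

int main(void)
{
    float img[H][W] = {{0}};
    img[3][3] = 1.0f;                      /* a lone "adversarial" spike */

    /* Divide: walk the image patch by patch; Denoise: filter each patch. */
    for (int py = 0; py < H; py += PATCH)
        for (int px = 0; px < W; px += PATCH)
            denoise_patch(img, py, px);

    printf("pixel (3,3) after per-patch denoising: %.3f\n", img[3][3]);
    return 0;
}
```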



Journal

Journal title: IEEE Access

Year: 2018

ISSN: 2169-3536

DOI: 10.1109/access.2018.2884201